Search Results

Documents authored by Raghavan, S.


Found 2 Possible Name Variants:

Raghavan, S.

Document
Fair Payments for Efficient Allocations in Public Sector Combinatorial Auctions

Authors: Robert Day and S. Raghavan

Published in: Dagstuhl Seminar Proceedings, Volume 5011, Computing and Markets (2005)


Abstract
Motivated by the increasing use of auctions by government agencies, we consider the problem of fairly pricing public goods in a combinatorial auction. A well-known problem with the incentive-compatible Vickrey-Clarke-Groves (VCG) auction mechanism is that the resulting prices may not be in the core. Loosely speaking, this means the payments of the winners could be so low that there are losing bidders who would have been willing to pay more than the payments of the winning bidders. Clearly, this "unfair" outcome is unacceptable for a public-sector auction. Proxy-based combinatorial auctions, in which each bidder submits several package bids to a proxy, result in efficient outcomes and bidder-Pareto-optimal core payments by winners, thus offering a viable practical alternative to address this problem. This paper confronts two critical issues facing the proxy auction. First, motivated to minimize a bidder's ability to benefit through strategic manipulation (by collusive agreement or unilateral action), we demonstrate the strength of a mechanism that minimizes total payments among all possible proxy-auction outcomes, narrowing the previously broad solution concept. Second, we address the computational difficulties of achieving these outcomes with a constraint-generation approach, promising to broaden the range of applications for which the proxy auction achieves a comfortably rapid solution.
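
A minimal, hypothetical sketch (Python; not the authors' implementation) may help make the core violation concrete: it brute-forces the efficient allocation and VCG payments for a tiny combinatorial auction whose items, bidders, and bid values are all made up for illustration.

# Hypothetical sketch: brute-force VCG payments for a toy combinatorial
# auction, showing how VCG revenue can fall outside the core.

items = {"A", "B"}
bids = {                                   # bidder -> {package: bid value}
    1: {frozenset({"A", "B"}): 10},        # bids only on the full bundle
    2: {frozenset({"A"}): 8},
    3: {frozenset({"B"}): 8},
}

def best_allocation(bidders):
    """Exhaustively find a welfare-maximizing assignment of disjoint packages."""
    best, best_val = {}, 0
    def rec(remaining, pool, assign, val):
        nonlocal best, best_val
        if val > best_val:
            best, best_val = dict(assign), val
        if not remaining:
            return
        b, rest = remaining[0], remaining[1:]
        rec(rest, pool, assign, val)       # branch: bidder b wins nothing
        for pkg, v in bids[b].items():
            if pkg <= pool:                # package still available?
                assign[b] = pkg
                rec(rest, pool - pkg, assign, val + v)
                del assign[b]
    rec(list(bidders), frozenset(items), {}, 0)
    return best, best_val

alloc, welfare = best_allocation(bids)
for b, pkg in alloc.items():
    _, welfare_without_b = best_allocation([x for x in bids if x != b])
    payment = welfare_without_b - (welfare - bids[b][pkg])
    print(f"bidder {b} wins {set(pkg)} and pays {payment}")

# Winners 2 and 3 pay 2 each (4 in total), yet the losing bidder offered 10
# for {A, B}: the VCG prices lie outside the core -- exactly the "unfair"
# outcome that core-selecting proxy payments are designed to rule out.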

Cite as

Robert Day and S. Raghavan. Fair Payments for Efficient Allocations in Public Sector Combinatorial Auctions. In Computing and Markets. Dagstuhl Seminar Proceedings, Volume 5011, pp. 1-29, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2005)


BibTeX

@InProceedings{day_et_al:DagSemProc.05011.9,
  author =	{Day, Robert and Raghavan, S.},
  title =	{{Fair Payments for Efficient Allocations in Public Sector Combinatorial Auctions}},
  booktitle =	{Computing and Markets},
  pages =	{1--29},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2005},
  volume =	{5011},
  editor =	{Lehmann, Daniel and M\"{u}ller, Rudolf and Sandholm, Tuomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.05011.9},
  URN =		{urn:nbn:de:0030-drops-1832},
  doi =		{10.4230/DagSemProc.05011.9},
  annote =	{Keywords: auctions, core, bidder-Pareto-optimal, constraint generation, VCG payments, proxy auctions, combinatorial auctions}
}

Raghavan, Manish

Document
Selection Problems in the Presence of Implicit Bias

Authors: Jon Kleinberg and Manish Raghavan

Published in: LIPIcs, Volume 94, 9th Innovations in Theoretical Computer Science Conference (ITCS 2018)


Abstract
Over the past two decades, the notion of implicit bias has come to serve as an important component in our understanding of bias and discrimination in activities such as hiring, promotion, and school admissions. Research on implicit bias posits that when people evaluate others - for example, in a hiring context - their unconscious biases about membership in particular demographic groups can have an effect on their decision-making, even when they have no deliberate intention to discriminate against members of these groups. A growing body of experimental work has demonstrated the effect that implicit bias can have in producing adverse outcomes. Here we propose a theoretical model for studying the effects of implicit bias on selection decisions, and a way of analyzing possible procedural remedies for implicit bias within this model. A canonical situation represented by our model is a hiring setting, in which recruiters are trying to evaluate the future potential of job applicants, but their estimates of potential are skewed by an unconscious bias against members of one group. In this model, we show that measures such as the Rooney Rule, a requirement that at least one member of an underrepresented group be selected, can not only improve the representation of the affected group, but also lead to higher payoffs in absolute terms for the organization performing the recruiting. However, identifying the conditions under which such measures can lead to improved payoffs involves subtle trade-offs between the extent of the bias and the underlying distribution of applicant characteristics, leading to novel theoretical questions about order statistics in the presence of probabilistic side information.
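
A hedged Monte Carlo sketch of this setting (an illustration of the idea, not the authors' model verbatim): candidate potentials are drawn from a power law, the recruiter's estimate for one group is divided by a bias factor beta > 1, and we compare the true potential hired with and without a Rooney-Rule-style constraint. All parameters (group sizes, alpha, beta, shortlist size, trials) are illustrative assumptions.

import random

def estimate(cand, beta):
    """Recruiter's biased estimate: group Y potentials are scaled down by beta."""
    group, potential = cand
    return potential / beta if group == "Y" else potential

def simulate(n_x=10, n_y=10, alpha=2.5, beta=2.0, k=4, trials=20000):
    random.seed(0)
    gain = 0.0
    for _ in range(trials):
        # Pareto-distributed true potentials, identical across groups.
        cands = [("X", random.paretovariate(alpha)) for _ in range(n_x)] \
              + [("Y", random.paretovariate(alpha)) for _ in range(n_y)]
        ranked = sorted(cands, key=lambda c: estimate(c, beta), reverse=True)
        shortlist = ranked[:k]
        rooney = list(shortlist)
        if all(g == "X" for g, _ in rooney):       # the rule binds
            best_y = max((c for c in cands if c[0] == "Y"),
                         key=lambda c: estimate(c, beta))
            rooney[-1] = best_y                    # displace the weakest finalist
        # Final hire: highest TRUE potential among finalists (a two-stage
        # reading in which the second stage is unbiased).
        gain += max(v for _, v in rooney) - max(v for _, v in shortlist)
    return gain / trials

print("average change in hired potential under the rule:", simulate())

Whether the average change comes out positive depends on alpha, beta, and k, which mirrors the paper's point that the benefit of the rule hinges on the bias level and the applicant distribution.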

Cite as

Jon Kleinberg and Manish Raghavan. Selection Problems in the Presence of Implicit Bias. In 9th Innovations in Theoretical Computer Science Conference (ITCS 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 94, pp. 33:1-33:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{kleinberg_et_al:LIPIcs.ITCS.2018.33,
  author =	{Kleinberg, Jon and Raghavan, Manish},
  title =	{{Selection Problems in the Presence of Implicit Bias}},
  booktitle =	{9th Innovations in Theoretical Computer Science Conference (ITCS 2018)},
  pages =	{33:1--33:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-060-6},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{94},
  editor =	{Karlin, Anna R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2018.33},
  URN =		{urn:nbn:de:0030-drops-83234},
  doi =		{10.4230/LIPIcs.ITCS.2018.33},
  annote =	{Keywords: algorithmic fairness, power laws, order statistics, implicit bias, Rooney Rule}
}

Document
Inherent Trade-Offs in the Fair Determination of Risk Scores

Authors: Jon Kleinberg, Sendhil Mullainathan, and Manish Raghavan

Published in: LIPIcs, Volume 67, 8th Innovations in Theoretical Computer Science Conference (ITCS 2017)


Abstract
Recent discussion in the public sphere about algorithmic classification has involved tension between competing notions of what it means for a probabilistic classification to be fair to different groups. We formalize three fairness conditions that lie at the heart of these debates, and we prove that except in highly constrained special cases, there is no method that can satisfy these three conditions simultaneously. Moreover, even satisfying all three conditions approximately requires that the data lie in an approximate version of one of the constrained special cases identified by our theorem. These results suggest some of the ways in which key notions of fairness are incompatible with each other, and hence provide a framework for thinking about the trade-offs between them.
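
A small illustrative check (hypothetical data, not from the paper) of the three conditions on a toy risk assignment: calibration within groups asks that, in each group, the fraction of positives among records scored b equals b; balance asks that the average score of the negative (resp. positive) class be equal across groups. In the data below, both groups are perfectly calibrated, yet because their base rates differ, both balance conditions fail, as the theorem predicts.

from statistics import mean

records = (                                # (group, score bin, true label)
    [("g1", 0.75, 1)] * 3 + [("g1", 0.75, 0)] +
    [("g1", 0.25, 1)]     + [("g1", 0.25, 0)] * 3 +
    [("g2", 0.75, 1)] * 3 + [("g2", 0.75, 0)] +
    [("g2", 0.25, 1)] * 2 + [("g2", 0.25, 0)] * 6
)

def calibration(group):
    """Positive rate per score bin; calibrated if it equals the bin value."""
    bins = sorted({s for g, s, _ in records if g == group})
    return {b: mean(y for g, s, y in records if g == group and s == b)
            for b in bins}

def balance(label):
    """Average score of the given class (0 or 1), per group."""
    groups = sorted({g for g, _, _ in records})
    return {g: mean(s for gg, s, y in records if gg == g and y == label)
            for g in groups}

for g in ("g1", "g2"):
    print(g, "calibration (bin -> positive rate):", calibration(g))
print("avg score, negative class:", balance(0))   # 0.375 vs ~0.321: unbalanced
print("avg score, positive class:", balance(1))   # 0.625 vs 0.550: unbalanced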

Cite as

Jon Kleinberg, Sendhil Mullainathan, and Manish Raghavan. Inherent Trade-Offs in the Fair Determination of Risk Scores. In 8th Innovations in Theoretical Computer Science Conference (ITCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 67, pp. 43:1-43:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{kleinberg_et_al:LIPIcs.ITCS.2017.43,
  author =	{Kleinberg, Jon and Mullainathan, Sendhil and Raghavan, Manish},
  title =	{{Inherent Trade-Offs in the Fair Determination of Risk Scores}},
  booktitle =	{8th Innovations in Theoretical Computer Science Conference (ITCS 2017)},
  pages =	{43:1--43:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-029-3},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{67},
  editor =	{Papadimitriou, Christos H.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2017.43},
  URN =		{urn:nbn:de:0030-drops-81560},
  doi =		{10.4230/LIPIcs.ITCS.2017.43},
  annote =	{Keywords: algorithmic fairness, risk tools, calibration}
}